AI-Generated Misinformation


AI-Generated Fake News Is Coming to an Election Near You

WIRED

Many years before ChatGPT was released, my research group, the University of Cambridge Social Decision-Making Laboratory, wondered whether it was possible to have neural networks generate misinformation. To achieve this, we trained ChatGPT's predecessor, GPT-2, on examples of popular conspiracy theories and then asked it to generate fake news for us. It gave us thousands of misleading but plausible-sounding news stories. A few examples: "Certain Vaccines Are Loaded With Dangerous Chemicals and Toxins," and "Government Officials Have Manipulated Stock Prices to Hide Scandals." The question was, would anyone believe these claims?
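For readers curious about the mechanics, the pipeline described above is the standard fine-tune-and-sample loop for a language model. The sketch below uses the Hugging Face transformers library to fine-tune GPT-2 on a file of example headlines and then sample new ones; the file name "headlines.txt", the prompt, and the hyperparameters are illustrative assumptions, not the lab's actual data or code.

```python
# Minimal fine-tune-and-sample sketch with Hugging Face transformers.
# Assumptions: "headlines.txt" (one training headline per line) and all
# hyperparameters are hypothetical placeholders.
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Build fixed-length training blocks from the example headlines.
dataset = TextDataset(tokenizer=tokenizer, file_path="headlines.txt",
                      block_size=64)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-headlines",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()

# Sample a handful of new headlines from the fine-tuned model.
prompt = tokenizer("Breaking:", return_tensors="pt")
outputs = model.generate(**prompt, max_length=30, do_sample=True, top_k=50,
                         num_return_sequences=5,
                         pad_token_id=tokenizer.eos_token_id)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```

This loop is also what made the experiment cheap to run: once fine-tuned, the model produces headline-style text at effectively zero marginal cost, which is how the lab could collect thousands of plausible-sounding examples.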


AI doomsday warnings a distraction from the danger it already poses, warns expert

The Guardian

Focusing on doomsday scenarios in artificial intelligence is a distraction that plays down immediate risks such as the large-scale generation of misinformation, according to a senior industry figure attending this week's AI safety summit. Aidan Gomez, co-author of a research paper that helped create the technology behind chatbots, said long-term risks such as existential threats to humanity from AI should be "studied and pursued", but that they could divert politicians from dealing with immediate potential harms. "I think in terms of existential risk and public policy, it isn't a productive conversation to be had," he said. "As far as public policy and where we should have the public-sector focus – or trying to mitigate the risk to the civilian population – I think it forms a distraction, away from risks that are much more tangible and immediate." Gomez is attending the two-day summit, which starts on Wednesday, as chief executive of Cohere, a North American company that makes AI tools for businesses including chatbots.


Why Bill Gates Isn't Too Worried About the Risks of AI

TIME - Tech

Bill Gates outlined how he thinks about the risks from artificial intelligence (AI) in a blog post on Tuesday. While Gates remains excited by the benefits that AI could bring, he shared his thoughts on the areas of risk he hears concern about most often. In the post, titled "The risks of AI are real but manageable," Gates discusses five risks from AI in particular. Among them: AI-generated misinformation and deepfakes could be used to scam people or even sway the results of an election, and AI could take people's jobs.


AI-generated misinformation likely to pose hazard in U.S. election campaigns

The Japan Times

Fast-evolving AI technology could turbocharge misinformation in U.S. political campaigns, observers say. The 2024 presidential race is expected to be the first American election that will see the widespread use of advanced tools powered by artificial intelligence that have increasingly blurred the boundaries between fact and fiction. Campaigns on both sides of the political divide are likely to harness this technology -- which is cheap, easily accessible and whose advances have vastly outpaced regulatory responses -- for voter outreach and to churn out fundraising newsletters within seconds.


Understanding AI-generated misinformation and evaluating algorithmic and human solutions

AIHub

Existing machine learning (ML) models used to detect online misinformation are less effective when matched against content created by ChatGPT or other large language models (LLMs), according to new research from Georgia Tech. Current ML models, designed for and trained on human-written content, show a significant performance gap when detecting paired human-written misinformation and misinformation generated by artificial intelligence (AI) systems, said Jiawei Zhou, a PhD student in Georgia Tech's School of Interactive Computing. Zhou's paper detailing the findings received a best paper honorable mention award at the 2023 ACM CHI Conference on Human Factors in Computing Systems. Advised by Associate Professor Munmun De Choudhury, Zhou demonstrates that LLMs can manipulate tone and linguistic style to let AI-generated misinformation slip through the cracks. "We found the AI-generated misinformation carried more emotions and cognitive processing expressions than its human-created counterparts," Zhou said.
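To make the reported gap concrete, here is a minimal, self-contained sketch of the style of comparison the study describes: a simple detector trained on human-written claims is scored on a human-written misinformation claim and on a hypothetical LLM paraphrase of it. The toy data, TF-IDF features, and logistic-regression classifier are illustrative assumptions; the paper's actual models, corpora, and evaluation differ and are far larger.

```python
# Toy illustration of the human-vs-AI detection gap. All data is invented
# for demonstration (labels: 1 = misinformation, 0 = reliable).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "certain vaccines are loaded with dangerous chemicals and toxins",
    "government officials manipulated stock prices to hide scandals",
    "large clinical trial confirms the vaccine's safety record",
    "regulators publish quarterly audits of stock market activity",
]
train_labels = [1, 1, 0, 0]

# A human-written misinformation claim and a hypothetical LLM paraphrase
# of the same claim that avoids the telltale vocabulary.
human_claim = "officials hid dangerous toxins in vaccines"
ai_paraphrase = ("some immunizations reportedly include harmful substances "
                 "concealed from the public")

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
classifier = LogisticRegression(max_iter=1000)
classifier.fit(vectorizer.fit_transform(train_texts), train_labels)

# The paraphrase shares almost no vocabulary with the training set, so the
# detector's score collapses toward chance: in miniature, the discrepancy
# the study measures at scale.
for name, text in [("human-written", human_claim),
                   ("AI-paraphrased", ai_paraphrase)]:
    prob = classifier.predict_proba(vectorizer.transform([text]))[0, 1]
    print(f"{name}: P(misinformation) = {prob:.2f}")
```

The lexical brittleness shown here is one plausible reading of the gap; the study's observation that AI-generated misinformation carries more emotional and cognitive-processing expressions points to stylistic shifts that surface detectors like this one are not trained to catch.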